Selected Prompt Details

After selecting a prompt from the Prompt Gallery, you will be directed to a detailed interface where you can interact with the prompt and tailor its settings to meet your unique requirements. This page offers an interactive conversation-based interface, enabling you to input data and receive real-time responses from the AI model.

Interaction Interface

The interaction interface for each prompt is structured to facilitate easy input and output exchange. It allows you to interact with the model, providing data or queries and receiving processed responses instantly. For example, by selecting the Audio Diarization prompt, you can upload an audio file and the model will return transcribed content with each speaker labeled separately.
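As a rough sketch of what an audio upload like this looks like when sent to the Gemini API directly, the snippet below builds a user turn that pairs inline audio data with a text instruction. The field names follow the public `generateContent` part schema; the audio bytes and prompt text are illustrative stand-ins, not real data.

```python
import base64

# Stand-in for real MP3 bytes that would normally be read from disk.
fake_audio_bytes = b"\x00\x01\x02"

# An inline-data part carrying the audio, base64-encoded as the API expects.
audio_part = {
    "inlineData": {
        "mimeType": "audio/mp3",
        "data": base64.b64encode(fake_audio_bytes).decode("ascii"),
    }
}

# A user turn combining the audio with a diarization instruction.
prompt_turn = {
    "role": "user",
    "parts": [audio_part, {"text": "Transcribe this and label each speaker."}],
}

print(prompt_turn["parts"][1]["text"])  # Transcribe this and label each speaker.
```

In the AI Studio interface this encoding happens behind the scenes; the sketch only shows the shape of the request that the upload ultimately produces.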

Conversation Flow

  • User Input: You can enter various types of data, such as audio files, text, or code snippets, depending on the task of the selected prompt.
  • Model Output: The AI model processes the input and generates an appropriate response. For instance, if you're transcribing audio, the model will output a transcription, or if reviewing code, the output will contain feedback or suggestions.
  • System Instructions: Optionally, you can provide additional instructions that control the tone, style, or focus of the AI's response. This ensures that the output meets your specific expectations.

The conversation flow is clearly labeled as User or Model, making it easy to track interactions and responses.
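The same labeled flow can be pictured as the request body a `generateContent` call would carry: alternating `user` and `model` turns, plus an optional system instruction. The field names below follow the public Gemini API schema; the prompt text and system instruction are illustrative examples, not output from the product.

```python
# Hypothetical request body mirroring the User/Model conversation flow
# shown in the interface. Text content here is purely illustrative.
request_body = {
    "system_instruction": {  # optional: steers tone, style, or focus
        "parts": [{"text": "Answer concisely and label each speaker."}]
    },
    "contents": [
        {"role": "user", "parts": [{"text": "Transcribe this meeting audio."}]},
        {"role": "model", "parts": [{"text": "Speaker 1: Welcome, everyone..."}]},
        {"role": "user", "parts": [{"text": "Who spoke the most?"}]},
    ],
}

# Roles alternate between "user" and "model", matching the labels
# shown in the conversation view.
roles = [turn["role"] for turn in request_body["contents"]]
print(roles)  # ['user', 'model', 'user']
```

Keeping the system instruction separate from the conversation turns is what lets it apply to every response without being repeated in each message.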


Customizing the Prompt Settings

On the right side of the page, you will find the Run Settings panel, where you can adjust a variety of parameters to control the AI model’s behavior and response generation:

  • Model Selection: Choose the AI model you wish to use from a dropdown list, such as Gemini 1.5 Flash, based on the task you're working on.
  • Token Count: Define the maximum number of tokens (sub-word units of text, roughly a few characters each) the model can use in its output. This helps control the length of the response.
  • Temperature: Adjust the temperature to affect the creativity of the model. Higher values yield more varied and creative outputs, while lower values produce more focused, deterministic responses.
  • JSON Mode: Toggle this mode to have the model return its output as valid JSON, which is particularly useful for structured data exchanges or API integrations.
  • Code Execution: Enable or disable code execution for prompts that require running code as part of the workflow. This setting is essential for tasks such as code review or code generation.
  • Safety Settings: Configure these to ensure that the AI-generated responses adhere to specific content safety guidelines, such as avoiding inappropriate language or sensitive topics.

By adjusting these settings, you can fine-tune the behavior of the AI and optimize the output for your specific project or task.
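To make the panel's options concrete, the sketch below maps each Run Setting onto the corresponding fields of a `generateContent` request. The field names and enum values follow the public Gemini API schema; the specific numbers are illustrative choices, not recommendations.

```python
# Sketch of how the Run Settings panel might map onto an API request.
run_settings = {
    "model": "gemini-1.5-flash",                 # Model Selection
    "generationConfig": {
        "maxOutputTokens": 1024,                 # Token Count: caps response length
        "temperature": 0.4,                      # lower = more focused, deterministic
        "responseMimeType": "application/json",  # JSON Mode: output as valid JSON
    },
    "tools": [{"codeExecution": {}}],            # Code Execution enabled
    "safetySettings": [                          # Safety Settings
        {
            "category": "HARM_CATEGORY_HARASSMENT",
            "threshold": "BLOCK_MEDIUM_AND_ABOVE",
        }
    ],
}

print(run_settings["generationConfig"]["temperature"])  # 0.4
```

The interface exposes the same knobs graphically; seeing them as one request object makes it clear that each setting travels with every run rather than being stored server-side.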


Saving and Running the Prompt

Once you've customized the settings to your liking, click the Run button to execute the prompt. The system will process the input based on your selected model and configurations, and display the output in real time. If the generated output meets your expectations, you can save the settings and response for future use.

Additionally, you have the option to save a copy of the conversation for later analysis or reuse in other projects. This feature allows you to leverage previous interactions and responses, streamlining your workflows and enhancing productivity.
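A minimal sketch of saving a conversation for later reuse is shown below. The file name and record shape are assumptions for illustration, not AI Studio's actual export format.

```python
import json
from pathlib import Path

# Illustrative conversation history; real turns would come from the session.
conversation = [
    {"role": "user", "text": "Review this function for bugs."},
    {"role": "model", "text": "The loop never increments its counter."},
]

# Persist the turns to disk as JSON.
path = Path("saved_conversation.json")
path.write_text(json.dumps(conversation, indent=2))

# Reload later to analyze or continue from where you left off.
restored = json.loads(path.read_text())
print(len(restored))  # 2
```

Storing the history as plain JSON keeps it easy to diff, share, or feed back into a new session as prior turns.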